Jointly optimized discriminative features for speech recognition

Authors

  • Tim Ng
  • Bing Zhang
  • Long Nguyen
Abstract

Over the past decade, methods have been proposed for extracting long-term acoustic features for speech recognition using Multi-Layer Perceptrons. These features have proved to be good complementary features when used for feature augmentation and/or through system combination. Usually, conventional linear dimension-reduction algorithms, e.g. Linear Discriminant Analysis, are not applied to the combined features. In this paper, a Region Dependent Transform is applied to jointly optimize the feature combination under a discriminative training criterion. Compared to conventional feature augmentation, a 3% to 6% relative character error rate reduction has been achieved for Mandarin speech recognition using the Region Dependent Transform.
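To make the idea concrete, the following is a minimal, illustrative sketch of how a Region Dependent Transform can be applied to an augmented feature vector, assuming the common formulation in which a small Gaussian mixture model defines the regions and the output is the posterior-weighted sum of region-specific affine transforms. All parameter names, dimensions, and values are hypothetical placeholders; the discriminative (e.g., MPE-style) training of the per-region matrices is not shown.

```python
# A minimal, illustrative sketch of a Region Dependent Transform (RDT) forward
# pass, assuming the regions are defined by a small Gaussian mixture model.
# The GMM parameters, feature dimensions, and variable names below are
# hypothetical placeholders, not values from the paper.
import numpy as np

def gmm_posteriors(x, means, inv_vars, log_weights):
    """Posterior probability of each region (diagonal-covariance GMM)."""
    # log N(x; mu_r, Sigma_r) up to a constant shared by all regions
    log_lik = -0.5 * np.sum(inv_vars * (x - means) ** 2, axis=1) \
              + 0.5 * np.sum(np.log(inv_vars), axis=1)
    log_post = log_weights + log_lik
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

def rdt_transform(x_aug, transforms, biases, means, inv_vars, log_weights):
    """Posterior-weighted sum of region-specific affine transforms.

    x_aug      : augmented feature vector (e.g., short-term + MLP features)
    transforms : array of shape (R, d_out, d_in), one matrix per region
    biases     : array of shape (R, d_out)
    """
    post = gmm_posteriors(x_aug, means, inv_vars, log_weights)   # shape (R,)
    y = np.zeros(transforms.shape[1])
    for r, gamma in enumerate(post):
        y += gamma * (transforms[r] @ x_aug + biases[r])
    return y

# Toy usage with random placeholder parameters (R=4 regions, 52 -> 39 dims).
rng = np.random.default_rng(0)
R, d_in, d_out = 4, 52, 39
params = dict(
    transforms=rng.standard_normal((R, d_out, d_in)) * 0.1,
    biases=np.zeros((R, d_out)),
    means=rng.standard_normal((R, d_in)),
    inv_vars=np.ones((R, d_in)),
    log_weights=np.log(np.full(R, 1.0 / R)),
)
y = rdt_transform(rng.standard_normal(d_in), **params)
print(y.shape)  # (39,)
```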

Similar resources

An Information-Theoretic Discussion of Convolutional Bottleneck Features for Robust Speech Recognition

Convolutional Neural Networks (CNNs) have demonstrated their performance in speech recognition systems for feature extraction as well as acoustic modeling. In addition, CNNs have been used for robust speech recognition, and competitive results have been reported. A Convolutive Bottleneck Network (CBN) is a kind of CNN that has a bottleneck layer among its fully connected layers. The bottleneck fea...
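As a rough illustration of the architecture described above, here is a minimal PyTorch sketch of a convolutional network with a bottleneck layer among its fully connected layers, whose bottleneck activations could serve as features. The input patch size, layer widths, 40-dimensional bottleneck, and target count are assumptions, not values from the cited work.

```python
# A minimal PyTorch sketch of a Convolutive Bottleneck Network (CBN):
# convolutional front-end followed by fully connected layers, with a narrow
# bottleneck layer whose activations can be used as features. Layer sizes,
# the 40x11 log-mel input patch, and the 40-dim bottleneck are hypothetical.
import torch
import torch.nn as nn

class ConvBottleneckNet(nn.Module):
    def __init__(self, num_targets=1000, bottleneck_dim=40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),                       # 64 * 10 * 2 = 1280 for a 40x11 input
        )
        self.pre = nn.Sequential(nn.Linear(1280, 1024), nn.ReLU())
        self.bottleneck = nn.Linear(1024, bottleneck_dim)   # feature layer
        self.post = nn.Sequential(
            nn.ReLU(), nn.Linear(bottleneck_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_targets),
        )

    def forward(self, x):
        h = self.pre(self.conv(x))
        bn = self.bottleneck(h)        # bottleneck features for the recognizer
        return self.post(bn), bn

# Toy usage: a batch of 8 single-channel 40-mel x 11-frame patches.
model = ConvBottleneckNet()
logits, features = model(torch.randn(8, 1, 40, 11))
print(logits.shape, features.shape)   # torch.Size([8, 1000]) torch.Size([8, 40])
```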

Design of Detectors for Automatic Speech Recognition

This thesis presents methods and results for optimizing subword detectors in continuous speech. Speech detectors are useful within areas like detection-based ASR, pronunciation training, phonetic analysis, word spotting, etc. First, we propose a structure suitable for subword detection. This structure is based on the standard HMM framework, but in each detector the MFCC feature extractor and t...
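As a generic illustration only, the sketch below scores MFCC-like frames with a likelihood-ratio detector, using single diagonal Gaussians as stand-ins for the target and background models; the thesis's actual HMM-based detector structure and jointly optimized front-end are not reproduced, and every model and threshold here is a placeholder.

```python
# A hedged, generic sketch of a frame-level subword detector that scores
# speech as a log-likelihood ratio between a target subword model and a
# background ("filler") model. Single diagonal Gaussians stand in for the
# HMM state distributions purely for illustration.
import numpy as np

def diag_gauss_loglik(frames, mean, var):
    """Frame-wise log-likelihood under a diagonal-covariance Gaussian."""
    return -0.5 * (np.sum(np.log(2 * np.pi * var))
                   + np.sum((frames - mean) ** 2 / var, axis=1))

def detect(frames, target, background, threshold=0.0):
    """Return per-frame detection decisions and scores (LLR > threshold)."""
    llr = (diag_gauss_loglik(frames, *target)
           - diag_gauss_loglik(frames, *background))
    return llr > threshold, llr

# Toy usage on random 13-dim "MFCC" frames with placeholder models.
rng = np.random.default_rng(1)
frames = rng.standard_normal((100, 13))
target = (np.zeros(13), np.ones(13))
background = (np.full(13, 0.5), np.full(13, 2.0))
decisions, scores = detect(frames, target, background)
print(decisions.sum(), scores.shape)
```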

Optimization on decoding graphs by discriminative training

The three main knowledge sources used in automatic speech recognition (ASR), namely the acoustic models, a dictionary, and a language model, are usually designed and optimized in isolation. Our previous work [1] proposed a methodology for jointly tuning these parameters, based on the integration of the resources as a finite-state graph whose transition weights are trained discriminatively. ...
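To illustrate the flavor of discriminatively trained transition weights on a decoding graph, here is a toy structured-perceptron-style update that rewards arcs on the reference path and penalizes arcs on the best competing path. This is only a conceptual sketch; the cited work's graph construction and training criterion are not reproduced, and the arc labels below are invented.

```python
# A toy sketch of discriminatively tuning the transition weights of a
# decoding graph with a structured-perceptron-style update: weights on arcs
# of the reference path are boosted and weights on the best competing path
# are penalized.
from collections import defaultdict

def update_graph_weights(weights, reference_path, hypothesis_path, lr=0.1):
    """weights: dict mapping arc id -> log transition weight."""
    if hypothesis_path == reference_path:
        return weights                     # no error, no update
    for arc in reference_path:
        weights[arc] += lr                 # reward arcs on the reference path
    for arc in hypothesis_path:
        weights[arc] -= lr                 # penalize arcs on the wrong path
    return weights

# Toy usage: arc ids are just labels; paths are arc sequences.
weights = defaultdict(float)
ref = ["a:SIL", "b:ni", "c:hao"]
hyp = ["a:SIL", "d:li", "c:hao"]
update_graph_weights(weights, ref, hyp)
print(dict(weights))   # {'a:SIL': 0.0, 'b:ni': 0.1, 'c:hao': 0.0, 'd:li': -0.1}
```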

Large-scale, sequence-discriminative, joint adaptive training for masking-based robust ASR

Recently, it was shown that the performance of supervised time-frequency masking-based robust automatic speech recognition techniques can be improved by training them jointly with the acoustic model [1]. The system in [1], termed deep neural network-based joint adaptive training, used fully connected feedforward deep neural networks for estimating time-frequency masks and for acoustic modeling; ...
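A simplified sketch of the joint training idea is given below: a mask-estimation network gates the noisy features, the masked features feed an acoustic model, and a single loss is backpropagated through both networks. A frame-level cross-entropy stands in for the sequence-discriminative criterion, and all layer sizes and hyperparameters are assumptions.

```python
# A simplified PyTorch sketch of joint training of a time-frequency mask
# estimator and an acoustic model: the mask network's output gates the noisy
# features, the masked features feed the acoustic model, and one loss is
# backpropagated through both networks.
import torch
import torch.nn as nn

feat_dim, num_states = 40, 500

mask_net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                         nn.Linear(256, feat_dim), nn.Sigmoid())
acoustic_model = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                               nn.Linear(512, num_states))

optimizer = torch.optim.Adam(
    list(mask_net.parameters()) + list(acoustic_model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy joint update on random "noisy" frames and random state targets.
noisy = torch.randn(32, feat_dim)
targets = torch.randint(0, num_states, (32,))

mask = mask_net(noisy)                # estimated mask in [0, 1]
enhanced = mask * noisy               # masked (enhanced) features
loss = criterion(acoustic_model(enhanced), targets)

optimizer.zero_grad()
loss.backward()                       # gradients flow into both networks
optimizer.step()
print(float(loss))
```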

Combining exemplar-based matching and exemplar-based sparse representations of speech

In this paper, we compare two different frameworks for exemplar-based speech recognition and propose a combined system that approximates the input speech as a linear combination of exemplars of variable length. This approach allows us not only to use long exemplars of multiple lengths, each representing a certain speech unit, but also to jointly approximate input speech segments using several exempl...
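The sketch below illustrates the core of an exemplar-based sparse representation: approximating a non-negative input segment as a sparse, non-negative linear combination of exemplars via multiplicative updates. The dictionary, dimensions, and sparsity weight are placeholders, and the variable-length exemplar handling described in the abstract is not shown.

```python
# A minimal sketch of exemplar-based sparse representation: a non-negative
# input segment is approximated as a sparse, non-negative linear combination
# of exemplars using multiplicative updates with a Euclidean cost and an L1
# penalty.
import numpy as np

def sparse_activations(x, exemplars, sparsity=0.1, n_iter=200, eps=1e-9):
    """Solve min_{a >= 0}  ||x - D a||^2 + sparsity * sum(a)  (D = exemplars)."""
    D = exemplars
    a = np.full(D.shape[1], 1.0 / D.shape[1])
    for _ in range(n_iter):
        a *= (D.T @ x) / (D.T @ (D @ a) + sparsity + eps)
    return a

# Toy usage: 20 non-negative "exemplars" of dimension 120 (e.g., stacked
# magnitude-spectrogram frames), and an input built from two of them.
rng = np.random.default_rng(2)
D = rng.random((120, 20))
x = 0.7 * D[:, 3] + 0.3 * D[:, 11]
a = sparse_activations(x, D)
print(np.argsort(a)[-2:])   # the two dominant exemplars (likely 3 and 11)
```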

Journal title:

Volume   Issue

Pages  -

Publication date: 2010